24 research outputs found

    International prize for the best work on satellite image fusion

    Get PDF
    A multidisciplinary research group formed by Xavier Otazu (Centre de Visió per Computador, Universitat Autònoma de Barcelona), Octavi Fors and Jorge Núñez (both of the Departament d'Astronomia i Meteorologia, Universitat de Barcelona), and María González-Audícana (Departamento de Proyectos e Ingeniería Rural, Universidad Pública de Navarra) has been awarded a prestigious international prize for developing a technique for fusing images obtained by satellites.

    UAB technology improves satellite images

    Get PDF
    Satellites silently observe the natural phenomena that occur on our planet. However, the images produced by these mechanical eyes are not as reliable as we would like. To alleviate this satellite myopia, a UAB team proposes an image fusion method called WiSpeR.
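
    The sketch below illustrates the general idea behind wavelet-based image fusion (pansharpening): spatial detail extracted from a high-resolution panchromatic band is injected into each lower-resolution multispectral band. It is not WiSpeR itself, which additionally weights the injected detail by the sensors' spectral responses; the Gaussian approximation of the à trous wavelet planes and all names below are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def fuse_band(pan, ms_band, levels=3):
        """Inject multiresolution detail of `pan` into one upsampled MS band."""
        # Upsample the multispectral band to the panchromatic grid.
        scale = pan.shape[0] / ms_band.shape[0]
        fused = zoom(ms_band, scale, order=1).astype(float)
        approx = pan.astype(float)
        for level in range(levels):
            # Each "wavelet plane" is the difference between two successive
            # low-pass versions of the panchromatic image (a trous scheme).
            smoothed = gaussian_filter(approx, sigma=2 ** level)
            fused += approx - smoothed  # inject the detail plane
            approx = smoothed
        return fused

    # Toy usage: a 256x256 panchromatic image and a 64x64 spectral band.
    pan = np.random.rand(256, 256)
    ms_band = np.random.rand(64, 64)
    sharpened = fuse_band(pan, ms_band)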

    Which tone-mapping operator is the best? A comparative study of perceptual quality

    Get PDF
    Other support: CERCA Programme / Generalitat de Catalunya. Published under Optica Publishing Group's Open Access Publishing Agreement (https://opg.optica.org/submit/review/pdf/CopyrightTransferOpenAccessAgreement-2022-06-27.pdf). Tone-mapping operators (TMOs) are designed to generate perceptually similar low-dynamic-range images from high-dynamic-range ones. We studied the performance of 15 TMOs in two psychophysical experiments in which observers compared digitally generated tone-mapped images to the corresponding physical scenes. All experiments were performed in a controlled environment, and the setups were designed to emphasize different image properties: in the first experiment we evaluated local relationships among intensity levels, and in the second we evaluated global visual appearance between physical scenes and tone-mapped images presented side by side. We ranked the TMOs according to how well they reproduced the results obtained in the physical scene. Our results show that ranking position clearly depends on the adopted evaluation criteria, which implies that, in general, these tone-mapping algorithms consider either local or global image attributes but rarely both. Regarding the question of which TMO is the best, KimKautz ["Consistent tone reproduction," in Proceedings of Computer Graphics and Imaging (2008)] and Krawczyk ["Lightness perception in tone reproduction for high dynamic range images," in Proceedings of Eurographics (2005), p. 3] obtained the best results across the different experiments. We conclude that more thorough and standardized evaluation criteria are needed to study all the characteristics of TMOs, as there is ample room for improvement in future developments.
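
    To make concrete what a global TMO does, here is the classic Reinhard global operator, one of the simplest members of the family (the study evaluates 15 operators, and this exact variant is not necessarily among them). The key value and Rec. 709 luminance weights are standard but assumed here.

    import numpy as np

    def reinhard_global(hdr, key=0.18, eps=1e-6):
        """Map an HDR RGB image (float, arbitrary range) into [0, 1]."""
        # Per-pixel luminance (Rec. 709 weights).
        lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
        # Scale so the scene's log-average luminance maps to `key`.
        log_avg = np.exp(np.mean(np.log(lum + eps)))
        scaled = key * lum / log_avg
        # Compressive non-linearity: large luminances saturate toward 1.
        mapped = scaled / (1.0 + scaled)
        # Reapply color by scaling each channel by the luminance ratio.
        return hdr * (mapped / (lum + eps))[..., None]

    hdr = np.random.rand(128, 128, 3) * 1e4  # toy high-dynamic-range image
    ldr = reinhard_global(hdr)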

    Multiresolution wavelet framework models brightness induction effects

    Get PDF
    A new multiresolution wavelet model is presented here, which accounts for brightness assimilation and contrast effects in a unified framework and incorporates known psychophysical and physiological attributes of the primate visual system (such as spatial frequency channels, oriented receptive fields, the contrast sensitivity function, and contrast non-linearities) under a unified set of parameters. Like other low-level models, such as the ODOG model [Blakeslee, B., & McCourt, M. E. (1999). A multiscale spatial filtering account of the White effect, simultaneous brightness contrast and grating induction. Vision Research, 39, 4361-4377], this formulation reproduces visual effects such as simultaneous contrast, the White effect, grating induction, the Todorović effect, Mach bands, the Chevreul effect and the Adelson-Logvinenko tile effects, but it also reproduces other previously unexplained effects, such as the dungeon illusion, all using a single set of parameters.
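
    As a toy illustration of the modeling idea (not the paper's fitted model), the sketch below decomposes an image into multiresolution wavelet channels, re-weights each scale as a stand-in for the contrast sensitivity function and contrast non-linearities, and reconstructs a predicted brightness image. The wavelet family, number of levels and weights are arbitrary assumptions.

    import numpy as np
    import pywt  # PyWavelets

    def perceived_brightness(img, weights=(0.5, 1.0, 1.4, 1.2)):
        # coeffs = [approximation, (H, V, D) coarsest, ..., (H, V, D) finest]
        coeffs = pywt.wavedec2(img, 'haar', level=len(weights))
        reweighted = [coeffs[0]]
        for (h, v, d), w in zip(coeffs[1:], weights):
            reweighted.append((w * h, w * v, w * d))
        return pywt.waverec2(reweighted, 'haar')

    img = np.random.rand(128, 128)
    prediction = perceived_brightness(img)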

    A New Wavelet-based Approach for the Automated Treatment of Large Sets of Lunar Occultation Data

    Full text link
    Context: The introduction of infrared arrays for lunar occultation (LO) work and the improvement of predictions based on new deep IR catalogues have resulted in a large increase in sensitivity and in the number of observable occultations. Aims: We provide the means for an automated reduction of large sets of LO data. This frees the user from the tedious task of estimating first-guess parameters for the fit of each LO lightcurve. At the end of the process, ready-made plots and statistics enable the user to identify sources that appear to be resolved or binary, and to initiate their detailed interactive analysis. Methods: The pipeline is tailored to array data, including the extraction of the lightcurves from FITS cubes. Because of its robustness and efficiency, the wavelet transform has been chosen to compute the initial guess of the parameters of the lightcurve fit. Results: We illustrate and discuss our automatic reduction pipeline by analyzing a large volume of novel occultation data recorded at Calar Alto Observatory. The automated pipeline package is available from the authors. The algorithm was tested with observations collected at Calar Alto Observatory (Spain); Calar Alto is operated by the German-Spanish Astronomical Center (CAHA).
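
    A minimal sketch of the "wavelet initial guess" step described above, under simplifying assumptions: the sharp intensity drop at the occultation epoch produces a strong response of a Haar-like step-detection kernel, and the maximum response across a few scales provides a robust first-guess epoch for the lightcurve fit. This illustrates the idea only; it is not the authors' pipeline code.

    import numpy as np

    def first_guess_epoch(flux, scales=(4, 8, 16)):
        best_t, best_resp = 0, -np.inf
        for s in scales:
            # Haar-like kernel at scale s: mean of the s earlier samples
            # minus the mean of the s later ones (large where the flux drops).
            kernel = np.concatenate([np.full(s, -1.0 / s), np.full(s, 1.0 / s)])
            resp = np.convolve(flux, kernel, mode='same')
            t = int(np.argmax(resp))
            if resp[t] > best_resp:
                best_t, best_resp = t, resp[t]
        return best_t

    # Toy lightcurve: constant flux with a noisy drop at sample 300.
    flux = np.concatenate([np.ones(300), 0.2 * np.ones(200)])
    flux += 0.05 * np.random.randn(flux.size)
    print(first_guess_epoch(flux))  # an index near the drop at sample 300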

    Low-level spatiochromatic grouping for saliency estimation

    Get PDF
    We propose a saliency model termed SIM (saliency by induction mechanisms), which is based on a low-level spatiochromatic model that has successfully predicted chromatic induction phenomena. In so doing, we hypothesize that the low-level visual mechanisms that enhance or suppress image detail are also responsible for making some image regions more salient. Moreover, SIM adds geometrical grouplets to enhance complex low-level features such as corners, and to suppress relatively simpler features such as edges. Since our model has been fitted on psychophysical chromatic induction data, it is largely nonparametric. SIM outperforms state-of-the-art methods in predicting eye fixations on two datasets and using two metrics.
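
    SIM's grouplet transform is beyond a short sketch, but the low-level distinction it exploits (corners have two strong local gradient directions, edges only one) is captured by the classic Harris measure below, shown purely as a stand-in that could re-weight a saliency map. The smoothing scale and the constant k are conventional assumed values.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def corner_measure(img, sigma=2.0, k=0.05):
        ix, iy = sobel(img, axis=1), sobel(img, axis=0)
        # Locally averaged structure-tensor components.
        jxx = gaussian_filter(ix * ix, sigma)
        jyy = gaussian_filter(iy * iy, sigma)
        jxy = gaussian_filter(ix * iy, sigma)
        det = jxx * jyy - jxy ** 2
        trace = jxx + jyy
        return det - k * trace ** 2  # high at corners, low or negative at edges

    img = np.random.rand(128, 128)
    response = corner_measure(img)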

    Saliency estimation using a non-parametric low-level vision model

    Get PDF
    Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism, and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.
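
    A minimal sketch of the pipeline just described, with assumed stand-ins: wavelet-decompose the image, weight each band by a center-surround energy ratio passed through a scale-weighting function playing the role of the ECSF, and apply the inverse wavelet transform to obtain the saliency map. The window sizes, the Gaussian scale weighting and the peak level are illustrative, not the trained values.

    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def saliency(img, levels=4, peak_level=2.0):
        coeffs = pywt.wavedec2(img, 'haar', level=levels)
        out = [np.zeros_like(coeffs[0])]  # discard the DC approximation
        for i, (h, v, d) in enumerate(coeffs[1:]):
            level = levels - i            # coeffs list the coarsest level first
            weighted = []
            for band in (h, v, d):
                center = uniform_filter(band ** 2, 3)           # center energy
                surround = uniform_filter(band ** 2, 9) + 1e-9  # surround energy
                # Scale weighting: responses near `peak_level` count most.
                w = (center / surround) * np.exp(-0.5 * (level - peak_level) ** 2)
                weighted.append(w * band)
            out.append(tuple(weighted))
        return np.abs(pywt.waverec2(out, 'haar'))

    img = np.random.rand(128, 128)
    saliency_map = saliency(img)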

    Integration of a robotic telescope's image archive into the Virtual Observatory

    Get PDF
    This document originally included other material and/or software that can only be consulted at the Biblioteca de Ciència i Tecnologia. We have implemented a Virtual Observatory (VO) service at the TFRM telescope facility, which allows the images taken by the telescope to be distributed remotely and automatically to any user of the service. The service consists of an image archive, an application that integrates images into the archive, and an application that communicates with VO clients, receiving requests and responding as specified in the Simple Image Access Protocol (SIAP).
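
    A SIAP query is an HTTP GET with the standard POS (RA,Dec in decimal degrees) and SIZE (search region in degrees) parameters, answered with a VOTable listing the matching images. A minimal client sketch follows; the endpoint URL is a placeholder, not the real TFRM archive address.

    import requests

    SIAP_URL = "http://example.org/tfrm/siap"  # hypothetical endpoint

    params = {
        "POS": "83.82,-5.39",    # RA,Dec of the search center (degrees)
        "SIZE": "0.5",           # angular size of the search region (degrees)
        "FORMAT": "image/fits",  # restrict results to FITS images
    }
    response = requests.get(SIAP_URL, params=params, timeout=30)
    print(response.text[:500])   # start of the VOTable describing matching images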

    Wavelet-based Image Deconvolution for Wide-Field CCD Imagery

    Full text link
    We show how a wavelet-based adaptive image deconvolution algorithm can provide significant improvements in the analysis of wide-field CCD images. To illustrate this, we apply our deconvolution protocol to a set of images from a Baker-Nunn telescope. This f/1 instrument has an outstanding 4.4° × 4.4° field of view with high optical quality, offering unique conditions for studying our deconvolution process and its results. In particular, we obtain an estimated gain in limiting magnitude of ΔR ∼ 0.6 mag and in limiting resolution of Δρ ∼ 3.9 arcsec. These results increase the number of accessible targets and the efficiency of the underlying scientific project.
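
    The published algorithm adapts maximum-likelihood deconvolution within a wavelet framework; as a hedged stand-in, the sketch below implements plain Richardson-Lucy deconvolution, the core iteration that such wavelet machinery regularizes. The PSF, noise level and iteration count are illustrative assumptions.

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, iterations=20, eps=1e-12):
        estimate = np.full_like(image, image.mean())
        psf_flip = psf[::-1, ::-1]
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode='same')
            ratio = image / (blurred + eps)
            estimate *= fftconvolve(ratio, psf_flip, mode='same')
        return estimate

    # Toy example: two stars blurred by a Gaussian PSF plus noise.
    yy, xx = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(xx ** 2 + yy ** 2) / 4.0)
    psf /= psf.sum()
    sky = np.zeros((128, 128))
    sky[40, 60], sky[80, 30] = 100.0, 50.0
    observed = fftconvolve(sky, psf, mode='same') + 0.01 * np.random.randn(128, 128)
    deconvolved = richardson_lucy(np.clip(observed, 0.0, None), psf)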